Binary classification with imbalanced datasets is challenging. Models tend to treat all samples as belonging to the majority class. Although existing solutions such as sampling methods, cost-sensitive methods, and ensemble learning methods improve the accuracy of the minority class, these methods are limited by overfitting problems or cost parameters that are difficult to determine. We propose HADR, a hybrid approach with dimensionality reduction that consists of data block construction, dimensionality reduction, and ensemble learning with deep neural network classifiers. We evaluate its performance on eight imbalanced public datasets in terms of recall, G-mean, and AUC. The results show that our model outperforms state-of-the-art methods.
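The abstract does not give implementation details, so the following is only a minimal sketch of the block-plus-ensemble idea, assuming PCA as the dimensionality-reduction step and scikit-learn MLPs as the neural base classifiers (both are illustrative choices, not necessarily the paper's):

```python
# Illustrative sketch (not the authors' code): an ensemble for imbalanced binary
# classification built from balanced data blocks, a dimensionality-reduction step,
# and neural-network base classifiers, in the spirit of the pipeline described above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def fit_block_ensemble(X, y, n_components=10, hidden=(64,), seed=0):
    """Train one MLP per balanced block of (majority subset + all minority samples)."""
    rng = np.random.default_rng(seed)
    pca = PCA(n_components=n_components).fit(X)
    Z = pca.transform(X)

    minority, majority = (1, 0) if (y == 1).sum() < (y == 0).sum() else (0, 1)
    Z_min, Z_maj = Z[y == minority], Z[y == majority]
    n_blocks = max(1, len(Z_maj) // len(Z_min))
    maj_blocks = np.array_split(rng.permutation(Z_maj), n_blocks)

    models = []
    for block in maj_blocks:
        Zb = np.vstack([block, Z_min])
        yb = np.concatenate([np.full(len(block), majority), np.full(len(Z_min), minority)])
        models.append(MLPClassifier(hidden_layer_sizes=hidden, max_iter=500,
                                     random_state=seed).fit(Zb, yb))
    return pca, models

def predict_proba(pca, models, X):
    """Average the per-block class-probability estimates."""
    Z = pca.transform(X)
    return np.mean([m.predict_proba(Z) for m in models], axis=0)
```

Averaging the per-block probabilities lets every classifier see all minority samples while no single classifier is overwhelmed by the majority class.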
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.
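The exact Med-PaLM recipe (exemplar selection, prompt length, optimizer) is not spelled out in the abstract; the sketch below only illustrates the general mechanism of parameter-efficient prompt tuning, assuming a Hugging Face-style causal LM interface:

```python
# Hedged sketch of parameter-efficient prompt tuning: only a small matrix of
# "soft prompt" vectors is trained while the underlying LM stays frozen.
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    def __init__(self, lm, num_prompt_tokens=20):
        super().__init__()
        self.lm = lm                       # any causal LM exposing .get_input_embeddings()
        for p in self.lm.parameters():     # freeze the base model
            p.requires_grad = False
        d_model = lm.get_input_embeddings().embedding_dim
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, d_model) * 0.02)

    def forward(self, input_ids, labels=None):
        tok_emb = self.lm.get_input_embeddings()(input_ids)            # (B, T, D)
        prompt = self.prompt.unsqueeze(0).expand(tok_emb.size(0), -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)            # prepend soft prompt
        if labels is not None:                                         # ignore loss on prompt positions
            pad = torch.full(labels.shape[:1] + (self.prompt.size(0),), -100,
                             dtype=labels.dtype, device=labels.device)
            labels = torch.cat([pad, labels], dim=1)
        return self.lm(inputs_embeds=inputs_embeds, labels=labels)
```

Only `self.prompt` receives gradients, which is what makes the approach parameter-efficient relative to full fine-tuning.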
We present a method for providing statistical guarantees on runtime safety and goal reachability for integrated planning and control of a class of systems with unknown nonlinear stochastic underactuated dynamics. Specifically, given a dynamics dataset, our method jointly learns a mean dynamics model, a spatially-varying disturbance bound that captures the effect of noise and model mismatch, and a feedback controller based on contraction theory that stabilizes the learned dynamics. We propose a sampling-based planner that uses the mean dynamics model and simultaneously bounds the closed-loop tracking error via a learned disturbance bound. We employ techniques from Extreme Value Theory (EVT) to estimate, to a specified level of confidence, several constants which characterize the learned components and govern the size of the tracking error bound. This ensures plans are guaranteed to be safely tracked at runtime. We validate that our guarantees translate to empirical safety in simulation on a 10D quadrotor, and in the real world on a physical CrazyFlie quadrotor and Clearpath Jackal robot, whereas baselines that ignore the model error and stochasticity are unsafe.
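The paper's exact EVT estimator and validation procedure may differ; the following sketch shows the common recipe of fitting a reverse Weibull (Weibull-max) distribution to block maxima to bound a quantity such as the model-mismatch norm at a chosen confidence level:

```python
# Illustrative sketch of using Extreme Value Theory to upper-bound an observed
# disturbance magnitude to a specified confidence level.
import numpy as np
from scipy import stats

def evt_upper_bound(samples, block_size=50, confidence=0.99):
    """Estimate an upper bound that holds with the given confidence.

    samples: 1-D array of observed disturbance magnitudes, e.g.
             ||x_{t+1} - f_learned(x_t, u_t)|| evaluated on held-out data.
    """
    samples = np.asarray(samples)
    n_blocks = len(samples) // block_size
    maxima = samples[:n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)

    # Fit a reverse Weibull (bounded-above extreme value family) to the block maxima.
    shape, loc, scale = stats.weibull_max.fit(maxima)

    # Goodness-of-fit check; a small p-value suggests the EVT assumptions are violated.
    ks_stat, p_value = stats.kstest(maxima, "weibull_max", args=(shape, loc, scale))

    bound = stats.weibull_max.ppf(confidence, shape, loc=loc, scale=scale)
    return bound, p_value
```

The resulting bound is what allows the disturbance term in the tracking-error analysis to be treated as known with a quantified confidence.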
Diffusion models are state-of-the-art deep learning empowered generative models that are trained based on the principle of learning forward and reverse diffusion processes via progressive noise-addition and denoising. To gain a better understanding of the limitations and potential risks, this paper presents the first study on the robustness of diffusion models against backdoor attacks. Specifically, we propose BadDiffusion, a novel attack framework that engineers compromised diffusion processes during model training for backdoor implantation. At the inference stage, the backdoored diffusion model will behave just like an untampered generator for regular data inputs, while falsely generating some targeted outcome designed by the bad actor upon receiving the implanted trigger signal. Such a critical risk can be dreadful for downstream tasks and applications built upon the problematic model. Our extensive experiments on various backdoor attack settings show that BadDiffusion can consistently lead to compromised diffusion models with high utility and target specificity. Even worse, BadDiffusion can be made cost-effective by simply finetuning a clean pre-trained diffusion model to implant backdoors. We also explore some possible countermeasures for risk mitigation. Our results call attention to potential risks and possible misuse of diffusion models.
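The exact BadDiffusion forward-process modification is more subtle than what fits here; the following is a heavily simplified sketch of the general poisoning idea, in which a fraction of training examples gets a trigger added to the noisy input and the clean target swapped for an attacker-chosen image (the model signature and schedule handling are illustrative assumptions):

```python
# Heavily simplified sketch of backdoor poisoning in DDPM-style training (PyTorch).
import torch
import torch.nn.functional as F

def poisoned_ddpm_loss(model, x0, alphas_bar, trigger, target_img, poison_rate=0.1):
    """x0: (B, C, H, W) clean images; alphas_bar: (T,) tensor of cumulative noise-schedule
    products; trigger/target_img: (C, H, W) attacker-chosen tensors."""
    B = x0.size(0)
    t = torch.randint(0, len(alphas_bar), (B,), device=x0.device)
    a_bar = alphas_bar[t].view(B, 1, 1, 1)
    noise = torch.randn_like(x0)

    poison = torch.rand(B, device=x0.device) < poison_rate
    clean_target = torch.where(poison.view(B, 1, 1, 1),
                               target_img.to(x0).expand_as(x0), x0)

    x_t = a_bar.sqrt() * clean_target + (1 - a_bar).sqrt() * noise
    x_t = x_t + poison.view(B, 1, 1, 1) * trigger.to(x0).expand_as(x0)

    pred_noise = model(x_t, t)            # standard epsilon-prediction network
    return F.mse_loss(pred_noise, noise)
```

Because only a small fraction of batches is altered, the model retains its utility on clean inputs while learning to steer trigger-bearing inputs toward the attacker's target.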
We tackle the problem of target-free text-guided image manipulation, which requires one to modify the input reference image based on the given text instruction, while no ground truth target image is observed during training. To address this challenging task, we propose a Cyclic-Manipulation GAN (cManiGAN) in this paper, which is able to realize where and how to edit the image regions of interest. Specifically, the image editor in cManiGAN learns to identify and complete the input image, while cross-modal interpreter and reasoner are deployed to verify the semantic correctness of the output image based on the input instruction. While the former utilizes factual/counterfactual description learning for authenticating the image semantics, the latter predicts the "undo" instruction and provides pixel-level supervision for the training of cManiGAN. With such operational cycle-consistency, our cManiGAN can be trained in the above weakly supervised setting. We conduct extensive experiments on the datasets of CLEVR and COCO, and the effectiveness and generalizability of our proposed method can be successfully verified. Project page: https://sites.google.com/view/wancyuanfan/projects/cmanigan.
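The module names and interfaces below are placeholders rather than the authors' implementation; the sketch only makes the operational cycle-consistency concrete: edit with the instruction, predict an "undo" instruction, re-edit, and supervise the round trip at the pixel level:

```python
# Schematic sketch of the cycle-consistency training step described above.
import torch
import torch.nn.functional as F

def cycle_consistency_step(editor, reasoner, interpreter, image, instruction):
    edited = editor(image, instruction)            # where/how to edit, per the instruction

    # Cross-modal interpreter: does the edited image match the instruction semantics?
    semantic_loss = interpreter.factual_counterfactual_loss(edited, instruction)

    # Reasoner predicts the inverse ("undo") instruction; re-editing should recover the input.
    undo_instruction = reasoner(edited, instruction)
    reconstructed = editor(edited, undo_instruction)
    cycle_loss = F.l1_loss(reconstructed, image)   # pixel-level supervision without a GT target

    return semantic_loss + cycle_loss
```

The cycle loss is what substitutes for the missing ground-truth target image in the weakly supervised setting.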
Novel object captioning (NOC) aims to describe images containing objects whose ground-truth captions are not observed during training. Due to the lack of caption annotations, the captioning model cannot be directly optimized via sequence-to-sequence training or CIDEr optimization. As a result, we propose Paraphrasing-to-Captioning (P2C), a two-stage learning framework for NOC that learns to paraphrase and optimize the output captions via paraphrasing. With P2C, the captioning model first learns paraphrasing from a language model pre-trained only on text corpora, expanding the word bank to improve linguistic fluency. To further enforce output captions that sufficiently describe the visual content of the input image, we perform self-paraphrasing on the captioning model with introduced fidelity and adequacy objectives. Since no ground-truth captions are available for novel object images during training, our P2C leverages cross-modal (image-text) association modules to ensure that the above caption properties are properly preserved. In our experiments, we not only show that P2C achieves state-of-the-art performance on the nocaps and COCO Caption datasets, but also verify the effectiveness and flexibility of our learning framework by replacing the language and cross-modal association models for NOC. Implementation details and code are available in the supplementary materials.
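The paper's own association module and fidelity/adequacy objectives are more involved than what the abstract states; in the sketch below, a pretrained CLIP similarity (via Hugging Face transformers) stands in for the cross-modal scorer that checks whether a generated caption still agrees with the image:

```python
# Minimal sketch of scoring caption-image agreement with an off-the-shelf
# cross-modal (image-text) association model.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def caption_image_score(image, captions):
    """Return an image-text similarity per candidate caption (higher = better match)."""
    inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.logits_per_image.squeeze(0)   # shape: (num_captions,)

# Example: accept a paraphrased caption only if it does not lose image-text agreement.
# scores = caption_image_score(img, [original_caption, paraphrased_caption])
# accept = scores[1] >= scores[0]
```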
Decentralized multi-agent planning has long been an important area of robotics research. An interesting and impactful application in this area is decentralized vehicle coordination in unstructured road environments. For example, at an intersection, it is useful to deconflict multiple vehicles traveling on intersecting paths without a central coordinator. We know from common sense that, to navigate such unstructured environments, drivers must understand and conform to the implicit "social etiquette" observed by nearby drivers. To study this implicit driving protocol, we collected the Berkeley DeepDrive Drone dataset. The dataset contains 1) a set of aerial videos recording unstructured driving, 2) a collection of images and annotations for training a vehicle detection model, and 3) a kit of development scripts illustrating typical usage. We believe the dataset is of primary interest for studying decentralized multi-agent planning among human drivers, and of secondary interest for computer vision in remotely sensed settings.
Inverse text normalization (ITN) is used to convert the spoken-form output of an automatic speech recognition (ASR) system into written form. Traditional handcrafted ITN rules can be complex to transcribe and maintain. Meanwhile, neural modeling approaches require large-scale, high-quality spoken-written pair examples from the same or a similar domain as the ASR system (in-domain data). Both approaches require costly and complex annotation. In this paper, we propose a data augmentation technique that efficiently generates rich spoken-written numeric pairs from out-of-domain textual data with minimal human annotation. We empirically demonstrate that ITN models trained with our data augmentation technique consistently outperform ITN models trained only on in-domain data, by 14.44% in overall accuracy across all number surfaces (e.g., cardinal, currency, and fraction).
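The paper's generator covers many more surfaces (currency, fractions, and so on); the sketch below is only an illustration of the kind of augmentation described, handling cardinals alone and relying on the third-party num2words package to synthesize the spoken form:

```python
# Illustrative sketch: mine written-form numbers from arbitrary (out-of-domain) text
# and pair each with a synthetic spoken-form rendering, yielding spoken->written
# training pairs for ITN with no manual annotation.
import re
from num2words import num2words

NUMBER = re.compile(r"\b\d{1,9}\b")

def mine_itn_pairs(text):
    """Yield (spoken_form, written_form) pairs for every cardinal found in the text."""
    pairs = []
    for match in NUMBER.finditer(text):
        written = match.group(0)
        spoken = num2words(int(written)).replace("-", " ").replace(",", "")
        pairs.append((spoken, written))
    return pairs

# Example:
# mine_itn_pairs("The order of 250 units shipped in 3 days")
# -> [("two hundred and fifty", "250"), ("three", "3")]
```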
We present a motion planning algorithm for a class of uncertain control-affine nonlinear systems that guarantees runtime safety and goal reachability when using high-dimensional sensor measurements (e.g., RGB-D images) and a learned perception module in the feedback control loop. First, given a dataset of states and observations, we train a perception system that seeks to invert a subset of the state from observations, and we estimate an upper bound on the perception error that is valid with high probability in a trusted domain near the data. Next, we use contraction theory to design a stabilizing state-feedback controller and a convergent dynamic observer that uses the learned perception system to update its state estimate. We derive a bound on the trajectory tracking error when this controller is subjected to errors in the dynamics and incorrect state estimates. Finally, we integrate this bound into a sampling-based motion planner, guiding it to return trajectories that can be safely tracked at runtime using sensor data. We demonstrate our method in simulation on a 4D car, a 6D planar quadrotor, and a 17D manipulation task with RGB(-D) sensor measurements, showing that it safely and reliably steers the system to the goal, whereas baselines that fail to account for the trusted domain or state estimation errors can be unsafe.
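The actual planner, bound computation, and trusted-domain test in the paper are considerably more detailed; the following simplified sketch only shows how a tracking-error bound can be folded into a sampling-based planner's edge check, keeping a candidate edge only if the entire error tube around it stays collision-free and inside the perception system's trusted domain:

```python
# Simplified sketch of a tube-aware collision/validity check for planner edges.
import numpy as np

def edge_is_safe(waypoints, obstacles, tube_radius, trusted_domain_fn):
    """waypoints: (N, d) states along a candidate edge;
    obstacles: list of (center, radius) spheres; tube_radius: tracking-error bound;
    trusted_domain_fn: returns True if the perception error bound is valid at a state."""
    for x in waypoints:
        if not trusted_domain_fn(x):
            return False                      # perception bound not certified here
        for center, radius in obstacles:
            if np.linalg.norm(x[:len(center)] - center) <= radius + tube_radius:
                return False                  # tube may intersect the obstacle
    return True
```

Inflating obstacles by the tube radius is the standard way such bounds translate planner-level feasibility into runtime safety guarantees.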
We design short-blocklength codes for the Gaussian wiretap channel under information-theoretic security guarantees. Our approach consists in decoupling the reliability and secrecy constraints in the code design. Specifically, we handle the reliability constraint via an autoencoder and the secrecy constraint with hash functions. For blocklengths smaller than or equal to 16, we evaluate through simulations the probability of error at the legitimate receiver and the leakage at the eavesdropper for our code construction. This leakage is defined as the mutual information between the confidential message and the eavesdropper's channel observations, and it is empirically measured via a neural network-based mutual information estimator. Our simulation results provide examples of codes with positive secrecy rates that outperform the best known non-constructive achievable secrecy rates for the Gaussian wiretap channel. Furthermore, we show that our code design is suitable for the compound and arbitrarily varying Gaussian wiretap channels, for which the channel statistics are not perfectly known but only belong to a pre-specified uncertainty set. These models capture not only uncertainty related to the estimation of channel statistics, but also scenarios in which the eavesdropper jams the legitimate transmission or influences its own channel statistics by changing its location.
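The paper's estimator may differ in architecture and training details; the sketch below shows a generic Donsker-Varadhan (MINE-style) neural mutual-information estimator of the kind that can measure leakage I(M; Z) between the confidential message M and the eavesdropper's observations Z:

```python
# Minimal sketch of a neural mutual-information lower-bound estimator (MINE style).
import torch
import torch.nn as nn

class MINE(nn.Module):
    def __init__(self, dim_m, dim_z, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_m + dim_z, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def mi_lower_bound(self, m, z):
        joint = self.net(torch.cat([m, z], dim=1))                # samples from p(m, z)
        z_shuffled = z[torch.randperm(z.size(0))]                 # break the dependence
        marginal = self.net(torch.cat([m, z_shuffled], dim=1))    # samples from p(m)p(z)
        n = torch.tensor(float(marginal.size(0)))
        return joint.mean() - (torch.logsumexp(marginal, dim=0) - torch.log(n)).squeeze()

# Training: maximize mi_lower_bound over mini-batches with an optimizer; the converged
# value (in nats) estimates the leakage for the evaluated code construction.
```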